% perm filename A02[AOO,RWF] blob sn#871765 filedate 1989-03-31, generic text, type C
\input macro[1,mps]
\magnification\magstephalf
\baselineskip 14pt
\noindent A02[AOO,rwf]
\bigskip
About twenty years ago, Hopcroft and Ullman wrote a classic text on idealized
digital machines (automata) and the languages they accept. It became the
standard for introductory courses in the theory of computability. Other texts
since then (Lewis and Papadimitriou, Cohen, Savitch, Harrison, ...) have
been paraphrases. The Hopcroft and Ullman text largely studies three
machines: the finite automaton, the pushdown acceptor, and the one-tape
deterministic Turing machine. They are separately defined, and studied in
separate chapters.

After teaching out of Hopcroft and Ullman five or six times, I realized I
could give a general definition for machines made up of components such as
keyboard-like inputs, printer-like outputs, CPU-like finite controls,
random-access memories, stacks, counters, queues, tapes, oracles, as well
as others we haven't thought of yet. I could give a general definition of
a program for a machine, and a computation of that program. I could give a
general definition of what it means for a program on one machine to simulate
a program on another machine, using a general definition of representation
to show that the configurations of the simulated machine are represented by
certain configurations of the simulating machine. Having established a
general simulation theorem, all the classic simulation results become special
cases, each broken down into very small subproblems.
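The component-based view described above can be sketched in code. This is a minimal illustration under assumptions of my own, not the text's actual formalism: a machine is an assembly of storage components, a program is a transition table, and the same driver runs a finite automaton (no components), a pushdown acceptor (one stack), or richer machines. All names here are illustrative.

```python
class Stack:
    """A storage component offering push/pop moves."""
    def __init__(self):
        self.items = []

    def apply(self, move):
        op, sym = move
        if op == "push":
            self.items.append(sym)
        elif op == "pop":
            # A pop move is only enabled when sym is on top of the stack.
            if not self.items or self.items[-1] != sym:
                raise ValueError("move not enabled")
            self.items.pop()

    def empty(self):
        return not self.items


def run(transitions, start, accept, components, word):
    """Run a deterministic program over a list of storage components.
    transitions maps (state, input symbol) ->
    (new state, [(component index, move), ...])."""
    state = start
    for ch in word:
        if (state, ch) not in transitions:
            return False
        state, moves = transitions[(state, ch)]
        try:
            for idx, move in moves:
                components[idx].apply(move)
        except ValueError:
            return False  # some component move was not enabled
    return state in accept and all(c.empty() for c in components)


# Instantiating the general driver with one Stack component gives a
# pushdown acceptor; this one accepts balanced strings of parentheses.
balanced = {
    ("q", "("): ("q", [(0, ("push", "("))]),
    ("q", ")"): ("q", [(0, ("pop", "("))]),
}
```

With a fresh `Stack` per run, `run(balanced, "q", {"q"}, [Stack()], "(())")` accepts, while `"(()"` and `")("` are rejected; dropping the stack component and its moves leaves an ordinary finite automaton, which is the point of the uniform definition.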

I also defined a way to connect the output of one program to the input of
another on a possibly different machine, resulting in a UNIX-like way to
compose machines. The sets of programs and machines become an algebraic
structure, with operations of composition, time reversal, input-output
reversal, etc. The classic theorems and constructions, such as undecidability
of certain questions about context-free languages or the existence of universal
Turing machines, fall out as simple corollaries of their more general forms.
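The UNIX-like composition just mentioned can be suggested in a few lines. This is a sketch of my own, not the book's construction: if each program is modeled as a transducer from an input stream to an output stream, then feeding one program's output stream to the next program's input composes them into a single machine.

```python
def upcase(chars):
    """A toy transducer: copy the stream, uppercasing each symbol."""
    for c in chars:
        yield c.upper()

def double(chars):
    """A toy transducer: emit every input symbol twice."""
    for c in chars:
        yield c
        yield c

def compose(*stages):
    """Pipe the output stream of each stage into the next, left to right,
    in the manner of a UNIX pipeline."""
    def piped(chars):
        stream = chars
        for stage in stages:
            stream = stage(stream)
        return stream
    return piped
```

For example, `"".join(compose(upcase, double)("ab"))` yields `"AABB"`: the composite is itself a transducer, so pipelines can be composed further, which is the algebraic closure property the text alludes to.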

As important as the generality of this model is its applicability. Well-defined
notions of simulation, representation, and composition provide the tools to
design, explain, and validate compilers, interpreters, parsers, lexical
analyzers, and other standard software components.

Several years ago I began writing course notes based on this treatment. It
differs from writing a research paper for a refereed journal, or from producing
another slightly improved version of Hopcroft and Ullman, because to be useful
it must be clearly intelligible to a significant fraction of our
undergraduates. The
task resembles going from teaching arithmetic to teaching group theory,
in an environment where an undergraduate who knows group theory can apply
it in the marketplace.

So when will the book be finished? I wish I knew, but I now have a
collaborator, my ex-TA and ex-student Richard Beigel, who just won a
Presidential Young Investigator award and has been teaching device-based
computability out of my notes and his own at Johns Hopkins. The definitions
and theorems are pretty well established, and my most recent class has seen
almost all of them.
We need the framework of explanation, examples, and exercises to support
the typical computer science undergraduate through the development of the
subject. I asked Beigel to help because he does these things well.
\bye